Create ombre_security_audit_openai.md #2648
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 8372d79bf0
```diff
@@ -0,0 +1,187 @@
+# Adding Security and Audit Logging to OpenAI Apps with Ombre
```
Add registry.yaml entry for new article
This commit introduces a new cookbook document but does not register it in registry.yaml, so it will not be picked up by the cookbook publication pipeline and users will not see it on cookbook.openai.com. The repo review guidelines in AGENTS.md explicitly mark metadata sync for new content as a priority-0 requirement.
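As a sketch, an entry along these lines could be added; the field names and values here are assumptions modeled on typical cookbook registry entries, and the actual path, date, and author handle must be checked against the existing registry.yaml and this PR:

```yaml
# Hypothetical registry.yaml entry -- verify field names against existing entries
- title: Adding Security and Audit Logging to OpenAI Apps with Ombre
  path: articles/ombre_security_audit_openai.md  # assumed location of the new file
  date: 2025-01-01                               # placeholder; use the actual date
  authors:
    - pypl0                                      # placeholder author handle
  tags:
    - security
```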
```diff
+\```bash
+pip install git+https://github.com/pypl0/Ombre.git
+pip install openai
+\```
```
Remove backslash escapes from code fences
The snippet fences are written as escaped backticks (for example \```bash), which renders literal backticks instead of fenced code blocks in Markdown. That breaks formatting for every code sample in this guide and makes examples harder to read and copy correctly.
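A corrected snippet would drop the backslashes and use plain fenced code blocks, for example:

````markdown
```bash
pip install git+https://github.com/pypl0/Ombre.git
pip install openai
```
````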
```diff
+from ombre import Ombre
+
+# Initialize with your OpenAI key
+ai = Ombre(openai_key="your-openai-key")
```
Use environment variable for API key examples
The examples initialize Ombre with a literal openai_key string, which encourages hard-coded credential usage when readers adapt the sample. This conflicts with the repo guidance to document environment-variable-based secrets handling and increases the risk of accidental key leakage in downstream code.
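A minimal sketch of the environment-variable approach; the `Ombre(openai_key=...)` constructor call is taken from the diff above and shown only in a comment, since the point here is how the key is loaded:

```python
import os

def load_openai_key() -> str:
    """Read the API key from the environment instead of hard-coding it."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; export it before running")
    return key

# With the key loaded from the environment, initialization becomes:
#   ai = Ombre(openai_key=load_openai_key())
```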
Adds a practical guide showing how to add production-grade
security, hallucination detection, semantic caching, and
tamper-proof audit logging to OpenAI applications using
Ombre — an open source AI infrastructure layer.
Covers:
github.com/pypl0/Ombre